
    Software Input Pattern and Test Coverage using Computational Linguistics on Structured Data

    This disclosure describes computational linguistics techniques for characterizing software input patterns and test coverage. Structured input data, which can have arbitrary and evolving schemas, are obtained from production software and from testbeds, then tokenized using tree traversal to generate a vocabulary, unigram statistics, and bags of words (BoW). The BoWs are subjected to statistical analysis to programmatically discover software usage patterns in production, identify test coverage, and flag gaps in testing.
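The abstract's pipeline (tree-traverse structured records, emit tokens, aggregate into vocabulary, unigram counts, and per-record bags of words) can be sketched as follows. This is a minimal illustration, not the disclosure's implementation; the `path=value` token scheme and the example records are assumptions.

```python
from collections import Counter

def tokenize(node, path=""):
    """Recursively traverse a JSON-like tree, emitting 'path=value' tokens
    (one hypothetical tokenization scheme for schema-free structured data)."""
    if isinstance(node, dict):
        for key, value in node.items():
            yield from tokenize(value, f"{path}.{key}" if path else key)
    elif isinstance(node, list):
        for item in node:
            yield from tokenize(item, path)
    else:
        yield f"{path}={node}"

def bags_of_words(records):
    """One bag of words (token -> count) per structured input record."""
    return [Counter(tokenize(record)) for record in records]

# Toy records standing in for production / testbed inputs.
records = [
    {"request": {"method": "GET", "path": "/users"}, "status": 200},
    {"request": {"method": "POST", "path": "/users"}, "status": 201},
]
bows = bags_of_words(records)
vocabulary = sorted(set().union(*bows))   # all tokens seen anywhere
unigrams = sum(bows, Counter())           # corpus-level unigram statistics
```

Comparing the unigram distributions of the production corpus against the testbed corpus would then surface tokens (i.e., input patterns) that occur in production but are never exercised by tests.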

    Hypergraph Neural Networks

    In this paper, we present a hypergraph neural network (HGNN) framework for data representation learning that can encode high-order data correlation in a hypergraph structure. Confronting the challenge of learning representations for complex data in practice, we propose to model such data with a hypergraph, which offers more flexible data modeling, especially for complex data. In this method, a hyperedge convolution operation is designed to handle data correlation during representation learning, so that the traditional hypergraph learning procedure can be conducted efficiently through hyperedge convolutions. HGNN learns hidden-layer representations that account for high-order data structure, making it a general framework for complex data correlations. We have conducted experiments on citation network classification and visual object recognition tasks and compared HGNN with graph convolutional networks and other traditional methods. Experimental results demonstrate that the proposed HGNN method outperforms recent state-of-the-art methods. The results also show that HGNN is superior to existing methods when dealing with multi-modal data.
    Comment: Accepted in AAAI'201
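The hyperedge convolution described above combines node features through the vertex-hyperedge incidence matrix with degree normalization. A minimal NumPy sketch of one such layer is below; the toy hypergraph, identity features, and identity weights are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def hyperedge_conv(X, H, Theta, w=None):
    """One hyperedge-convolution layer (sketch).

    X: (n, c) node features; H: (n, m) vertex-hyperedge incidence matrix;
    Theta: (c, c_out) learnable weights; w: (m,) hyperedge weights.
    """
    n, m = H.shape
    w = np.ones(m) if w is None else w
    dv = H @ w                      # weighted vertex degrees
    de = H.sum(axis=0)              # hyperedge degrees
    Dv_is = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    # Degree-normalized propagation operator over the hypergraph.
    A = Dv_is @ H @ np.diag(w) @ De_inv @ H.T @ Dv_is
    return np.maximum(A @ X @ Theta, 0.0)   # ReLU activation

# Toy example: 4 nodes, 2 hyperedges ({0,1,2} and {2,3}).
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
X = np.eye(4)        # one-hot node features, for illustration
Theta = np.eye(4)    # identity weights, for illustration
Y = hyperedge_conv(X, H, Theta)
```

Because a hyperedge connects any number of vertices, this single operation aggregates information from entire vertex groups at once, which is what lets the layer capture the high-order correlations a pairwise graph convolution cannot.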

    Multi-Energy Blended CBCT Spectral Imaging Using a Spectral Modulator with Flying Focal Spot (SMFFS)

    Cone-beam CT (CBCT) spectral imaging has great potential in medical and industrial applications, but it is very challenging because scatter and spectral effects are strongly entangled. In this work, we present the first attempt to develop a stationary spectral modulator with flying focal spot (SMFFS) technology as a promising, low-cost approach to accurately solving the X-ray scattering problem and physically enabling spectral imaging in a unified framework, with no significant misalignment in the data sampling of spectral projections. Based on an in-depth analysis of optimal energy separation across different combinations of modulator materials and thicknesses, we present a practical design of a mixed two-dimensional spectral modulator that can generate multi-energy blended CBCT spectral projections. To deal with the entangled scatter-spectral challenge, we propose a novel scatter-decoupled material decomposition (SDMD) method that takes advantage of the scatter similarity in SMFFS. A Monte Carlo simulation is conducted to validate the strong similarity of X-ray scatter distributions across the flying focal spot positions. Both numerical simulations using a clinical abdominal CT dataset and physics experiments on a tabletop CBCT system using a GAMMEX multi-energy CT phantom are carried out to demonstrate the feasibility of our proposed SDMD method for CBCT spectral imaging with SMFFS. In the physics experiments, the mean relative errors in selected ROIs for the virtual monochromatic image (VMI) are 0.9% for SMFFS, versus 5.3% and 16.9% for an 80/120 kV dual-energy cone-beam scan with and without scatter correction, respectively. Our preliminary results show that SMFFS can effectively improve the quantitative imaging performance of CBCT.
    Comment: 10 pages, 13 figures
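The SDMD specifics are not given in the abstract, but its final step, material decomposition once scatter has been decoupled, reduces per ray to solving a small linear system relating measured attenuation at two effective energies to basis-material path lengths. The sketch below shows that generic step; the attenuation coefficients and path lengths are purely illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical mass-attenuation-style coefficients for two basis materials
# (columns: water-like, bone-like) at a low and a high effective energy.
M = np.array([[0.227, 0.271],    # low-energy row
              [0.171, 0.186]])   # high-energy row

def decompose(mu_low, mu_high):
    """Solve the 2x2 system for basis-material line integrals along one ray."""
    return np.linalg.solve(M, np.array([mu_low, mu_high]))

# A ray traversing 10 cm water-equivalent and 2 cm bone-equivalent material.
a_true = np.array([10.0, 2.0])
mu = M @ a_true            # forward model, assuming scatter already removed
a_est = decompose(*mu)     # recovers a_true exactly in this noise-free sketch
```

The point of SDMD, as described, is that the scatter estimate can be shared across focal-spot positions (exploiting the validated scatter similarity), so the inputs `mu_low`/`mu_high` to this decomposition step are scatter-free despite the blended acquisition.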

    DeSAM: Decoupling Segment Anything Model for Generalizable Medical Image Segmentation

    Deep-learning-based automatic medical image segmentation models often suffer from domain shift, where models trained on a source domain do not generalize well to other, unseen domains. As a vision foundation model with powerful generalization capabilities, the Segment Anything Model (SAM) shows potential for improving the cross-domain robustness of medical image segmentation. However, SAM and its fine-tuned variants perform significantly worse in fully automatic mode than when given manual prompts. Upon further investigation, we discovered that this degradation is related to the coupling of poor prompts and mask segmentation: in fully automatic mode, inevitable poor prompts (such as points outside the mask or boxes significantly larger than the mask) can severely mislead mask generation. To address this coupling effect, we propose the decoupling SAM (DeSAM). DeSAM modifies SAM's mask decoder to decouple mask generation from prompt embeddings while leveraging pre-trained weights. We conducted experiments on publicly available prostate cross-site datasets. The results show that DeSAM improves the Dice score by an average of 8.96% (from 70.06% to 79.02%) over the previous state-of-the-art domain generalization method. Moreover, DeSAM can be trained on personal devices with an entry-level GPU, since our approach does not rely on tuning the heavyweight image encoder. The code is publicly available at https://github.com/yifangao112/DeSAM.
    Comment: 12 pages
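The reported gain is measured by the Dice score, the standard overlap metric for segmentation masks. A minimal NumPy implementation is shown below; the toy masks are assumptions for illustration, not data from the paper.

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

pred   = np.array([[0, 1, 1], [0, 1, 0]])   # predicted mask
target = np.array([[0, 1, 0], [0, 1, 0]])   # ground-truth mask
score = dice_score(pred, target)            # 2*2 / (3 + 2) = 0.8
```

A jump from 70.06% to 79.02% mean Dice on unseen sites therefore reflects substantially better mask overlap under domain shift, achieved without retraining the heavyweight image encoder.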